
    Quantum Kinks: Solitons at Strong Coupling

    We examine solitons in theories with heavy fermions. These "quantum" solitons differ dramatically from semi-classical (perturbative) solitons because fermion loop effects are important when the Yukawa coupling is strong. We focus on kinks in a (1+1)-dimensional ϕ^4 theory coupled to fermions; a large-N expansion is employed to treat the Yukawa coupling g nonperturbatively. A local expression for the fermion vacuum energy is derived using the WKB approximation for the Dirac eigenvalues. We find that fermion loop corrections increase the energy of the kink and (for large g) decrease its size. For large g, the energy of the quantum kink is proportional to g and its size scales as 1/g, unlike the classical kink; we argue that these features are generic to quantum solitons in theories with strong Yukawa couplings. We also discuss the possible instability of fermions to solitons.
    Comment: 21 pp. + 2 figs., phyzzx, JHU-TIPAC-92001
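    For comparison with the quoted strong-coupling scaling, the classical kink of the (1+1)-dimensional ϕ^4 theory is a standard textbook result (not taken from this abstract):

```latex
V(\phi) = \frac{\lambda}{4}\,\bigl(\phi^2 - v^2\bigr)^2, \qquad
\phi_K(x) = v \tanh\!\Bigl(\sqrt{\lambda/2}\; v\, x\Bigr), \qquad
M_{\mathrm{cl}} = \frac{2\sqrt{2}}{3}\,\sqrt{\lambda}\; v^3 .
```

    Both the classical mass and the classical width (of order $(\sqrt{\lambda/2}\,v)^{-1}$) are independent of the Yukawa coupling g, which is what makes the quantum-kink behavior quoted above, energy $\propto g$ and size $\propto 1/g$, qualitatively different.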

    A limited-size ensemble of homogeneous CNN/LSTMs for high-performance word classification

    The strength of long short-term memory neural networks (LSTMs) lies more in handling sequences of variable length than in handling geometric variability of image patterns. In this paper, an end-to-end convolutional LSTM neural network is used to handle both geometric variation and sequence variability. The best LSTM results are often based on large-scale training of an ensemble of network instances. We show that high performance can be reached on a common benchmark set with just five such networks, using proper data augmentation, a proper coding scheme, and a proper voting scheme. The networks have similar architectures: a five-layer convolutional neural network (CNN) and a three-layer bidirectional LSTM (BiLSTM), followed by a connectionist temporal classification (CTC) processing step. The approach uses differently scaled input images and different feature-map sizes. Three datasets are used: the standard benchmark RIMES dataset (French), a historical handwritten dataset, KdK (Dutch), and the standard benchmark George Washington (GW) dataset (English). The final performance on the word-recognition test of RIMES was 96.6%, a clear improvement over other state-of-the-art approaches that did not use a pre-trained network. On the KdK and GW datasets, our approach also shows good results. The proposed approach is deployed in the Monk search engine for historical-handwriting collections.
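    The abstract does not spell out the voting scheme; a minimal sketch of one plausible choice, a plurality vote over the word hypotheses of the five networks (function name, tie-breaking, and example words are assumptions), might look like:

```python
from collections import Counter

def ensemble_vote(hypotheses):
    """Plurality vote over word hypotheses, one per ensemble member.

    Ties are broken in favor of the hypothesis that appears first,
    since Counter preserves insertion order for equal counts.
    """
    return Counter(hypotheses).most_common(1)[0][0]

# Five hypothetical network outputs for one word image:
print(ensemble_vote(["merci", "merci", "mardi", "merci", "mardi"]))  # merci
```

    In practice a voting scheme may also weight hypotheses by each network's CTC confidence rather than counting them equally.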

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model describes face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of nonverbal social signals in addition to the spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model makes it possible to quantitatively evaluate the quality of the interaction, using statistical tools that measure how effective the recognition phase is. In this paper we cast this theory in the setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source, low-cost robotic head platform, where gaze is the social signal considered.
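    The statistical evaluation referred to above is, in lens-model terms, usually built on correlations: how strongly the sender's internal state maps onto an observable cue (externalization), and how strongly the receiver's judgment tracks the true state (attribution). A minimal sketch under that assumption (all variable names and sample scores are hypothetical, not from the paper):

```python
from math import sqrt

def pearson(xs, ys):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-interaction scores:
true_state = [1, 0, 1, 1, 0, 0]              # did the human seek eye contact?
gaze_cue = [0.9, 0.2, 0.8, 0.7, 0.1, 0.3]    # measured gaze signal
judged = [1, 0, 1, 0, 0, 0]                  # state recognized by the robot

externalization = pearson(true_state, gaze_cue)  # state -> cue strength
attribution = pearson(true_state, judged)        # effectiveness of recognition
```

    A low attribution correlation despite a high externalization correlation would localize the problem in the robot's recognition phase rather than in the human's signaling.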

    Affective Reasoning for Big Social Data Analysis

    This special section focuses on the introduction, presentation, and discussion of novel techniques that further develop and apply affective reasoning tools and techniques for big social data analysis. A key motivation for this special section, in particular, is to explore the adoption of novel affective reasoning frameworks and cognitive learning systems that go beyond a mere word-level analysis of natural language text, providing concept-level tools and techniques that allow a more efficient passage from (unstructured) natural language to (structured) machine-processable affective data, in potentially any domain. The selected papers address a wide spectrum of issues in affective computing research and, hence, help to grasp the current limitations and opportunities of this fast-evolving branch of artificial intelligence. Out of the 29 submissions received, 5 were accepted to appear in the special section. One of the accepted papers underwent 3 rounds of revisions; the rest were revised twice. The papers appearing in this issue are briefly summarized.

    More than words: inference of socially relevant information from nonverbal vocal cues in speech

    This paper presents two examples of how nonverbal communication can be automatically detected and interpreted in terms of social phenomena. In particular, the presented approaches use simple prosodic features to distinguish between journalists and non-journalists in the media, and extract social networks from turn-taking patterns to recognize roles in different interaction settings (broadcast data and meetings). Furthermore, the article outlines some of the most interesting perspectives in this line of research.
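    Extracting a social network from turn-taking can be sketched, in its simplest form, as counting who speaks after whom; the resulting counts form a weighted directed graph over speakers. This is an illustrative simplification, not the paper's exact method, and the speaker labels are hypothetical:

```python
from collections import defaultdict

def turn_transition_counts(turns):
    """Count speaker-change transitions in a sequence of speaker turns.

    Returns a dict mapping (previous_speaker, next_speaker) pairs to
    counts, i.e. the weighted edges of a directed turn-taking network.
    """
    counts = defaultdict(int)
    for prev, nxt in zip(turns, turns[1:]):
        if prev != nxt:  # count only actual speaker changes
            counts[(prev, nxt)] += 1
    return dict(counts)

# Hypothetical broadcast segment: anchor A, guests B and C
turns = ["A", "B", "A", "C", "A", "B", "A"]
print(turn_transition_counts(turns))
# {('A', 'B'): 2, ('B', 'A'): 2, ('A', 'C'): 1, ('C', 'A'): 1}
```

    A speaker who participates in most transitions, as A does here, exhibits a centrality pattern typical of moderator or anchor roles.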

    Implicitly and intelligently influencing the interactive experience

    Enabling intuitive interaction in system design remains more an art than a science. This difficulty is exacerbated when the diversity of devices and end-user groups is considered. In this paper, it is argued that conventional interaction modalities are unsuitable in many circumstances and that alternative modalities need to be considered. Specifically, the case of implicit interaction is examined, and the paper discusses how its use may lead to more satisfactory experiences: harnessing implicit interaction in conjunction with the traditional explicit interaction modality can enable a more intuitive and natural interactive experience. However, capturing and interpreting implicit interaction is problematic and lends itself to the adoption of AI techniques. In this position paper, lightweight intelligent agents are proposed as a model for harmonising the explicit and implicit components of an arbitrary interaction.
    Science Foundation Ireland